Allen AI & UW Propose Unified-IO: A High-Performance, Task-Agnostic Model for CV, NLP, and Multi-Modal Tasks
Building a general-purpose unified model that can solve diverse tasks across modalities while maintaining high performance is a long-standing challenge in the machine learning research community. A conventional approach is to build models with task-specialized heads on top of a shared architectural backbone, but such models require expert knowledge to design a specialized head for each task, and their lack of parameter-sharing for new tasks limits their transfer-learning capabilities.

In the new paper Unified-IO: A Unified Model for Vision, Language, and Multi-Modal Tasks, a research team from the Allen Institute for AI and the University of Washington introduces Unified-IO, a neural model with no task- or modality-specific branches that achieves competitive performance across a wide variety of computer vision (CV), natural language processing (NLP), and multi-modal benchmark tasks without fine-tuning.

The researchers set out to build a unified neural architecture that ML practitioners with little or no knowledge of the underlying machinery could use to efficiently and effectively train models for new NLP and CV tasks. For a model to support such a variety of modalities (images, language, bounding boxes, binary masks, segmentation maps, etc.), it must represent all of them in a shared space.
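One common way to map structured outputs like bounding boxes into a shared token space is to quantize continuous coordinates into a fixed vocabulary of discrete location tokens that can be mixed with text tokens. The sketch below illustrates the general idea only; the function name, token format, and bin count are assumptions for illustration, not the paper's actual implementation.

```python
def box_to_tokens(box, image_size, num_bins=1000):
    """Quantize a bounding box into discrete location tokens.

    box: (x1, y1, x2, y2) in pixel coordinates.
    image_size: (width, height) of the image.
    num_bins: size of the assumed location-token vocabulary.
    Returns a list of token strings that could be interleaved
    with ordinary text tokens in a sequence-to-sequence model.
    """
    w, h = image_size
    x1, y1, x2, y2 = box
    # Normalize each coordinate to [0, 1], then bucket it into one of
    # num_bins discrete bins (clamping the upper edge into range).
    normalized = [x1 / w, y1 / h, x2 / w, y2 / h]
    return [f"<loc_{min(int(c * num_bins), num_bins - 1)}>" for c in normalized]

# Example: a box covering the full 100x100 image maps to the
# extreme location tokens.
print(box_to_tokens((0, 0, 100, 100), (100, 100)))
```

Once boxes, masks, and images are all expressed as token sequences like this, a single encoder-decoder can be trained on every task with the same input/output interface.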
Exploring the COVID-19 Open Research Dataset with Lucy Lu Wang from Allen AI (Practical AI #86)
Yeah, so the entire project is a coordinated effort by the White House Office of Science and Technology Policy. I think some time in early March a group at Georgetown, the Center for Security and Emerging Technology (CSET), reached out to us at Allen AI to help coordinate the release of this dataset, along with a couple of different organizations. You mentioned MSR (Microsoft Research); Chan Zuckerberg, Kaggle, and the National Library of Medicine, which is part of the NIH, were also involved. So all these groups were going to come together to essentially create this dataset, to help create text mining and information retrieval tools that could assist medical experts in understanding more of what was going on with the epidemic. For Allen AI, the way that we got involved is we had recently created a new pipeline to revamp our open research corpus.
Google Search Now Reads at a Higher Level
Google search is advancing a reading grade. Google says it has enhanced its search-ranking system with software called BERT, or Bidirectional Encoder Representations from Transformers to its friends. It was developed in the company's artificial intelligence labs and announced last fall, breaking records on reading-comprehension questions that researchers use to test AI software. Pandu Nayak, Google's vice president of search, said at a briefing Thursday that the Muppet-monikered software has made Google's search algorithm much better at handling long queries, or ones where the relationships between words are crucial. You're now less likely to get frustrating responses to queries dependent on prepositions like "for" and "to," or negations such as "not" or "no." "This is the single biggest positive change we've had in the last five years," Nayak said, at least according to Google's measures of how ranking changes help people find what they want.